A Universal Parallel Two-Pass MDL Context Tree Compression Algorithm
Computing problems that handle large amounts of data necessitate the use of
lossless data compression for efficient storage and transmission. We present a
novel lossless universal data compression algorithm that uses parallel
computational units to increase the throughput. The length-N input sequence
is partitioned into B blocks. Processing each block independently of the
other blocks can accelerate the computation by a factor of B, but degrades
the compression quality. Instead, our approach is to first estimate the minimum
description length (MDL) context tree source underlying the entire input, and
then encode each of the blocks in parallel based on the MDL source. With
this two-pass approach, the compression loss incurred by using more parallel
units is insignificant. Our algorithm is work-efficient, i.e., its total
computational complexity is O(N). Its redundancy above Rissanen's lower bound
on universal compression performance, with respect to any context tree source
of bounded maximal depth, remains modest. We improve the compression by using different quantizers for
states of the context tree based on the number of symbols corresponding to
those states. Numerical results from a prototype implementation suggest that
our algorithm offers a better trade-off between compression and throughput than
competing universal data compression algorithms.
Comment: Accepted to Journal of Selected Topics in Signal Processing special
issue on Signal Processing for Big Data (expected publication date June
2015). 10 pages double column, 6 figures, and 2 tables. arXiv admin note:
substantial text overlap with arXiv:1405.6322. Version: Mar 2015: Corrected a
typo.
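The two-pass idea above can be sketched with a toy fixed-depth context model standing in for the MDL context tree. This is a hedged illustration, not the paper's construction: the depth, block count, test sequence, and Laplace smoothing are all assumptions chosen for brevity. The first pass fits one model on the entire input; the second pass scores every block against that single shared model, so blocks can be encoded in parallel without each learning its own statistics.

```python
import math
from collections import Counter

def context_counts(seq, depth):
    # First pass: count (context, symbol) pairs for a fixed-depth Markov model.
    counts, ctx_totals = Counter(), Counter()
    for i in range(depth, len(seq)):
        ctx = seq[i - depth:i]
        counts[(ctx, seq[i])] += 1
        ctx_totals[ctx] += 1
    return counts, ctx_totals

def code_length(seq, depth, counts, ctx_totals, alphabet_size=2):
    # Ideal code length in bits under Laplace-smoothed model probabilities.
    bits = 0.0
    for i in range(depth, len(seq)):
        ctx = seq[i - depth:i]
        p = (counts[(ctx, seq[i])] + 1) / (ctx_totals[ctx] + alphabet_size)
        bits += -math.log2(p)
    return bits

seq = "0110101101001011" * 16          # toy binary input
depth, B = 2, 4                        # illustrative depth and block count
counts, totals = context_counts(seq, depth)
blocks = [seq[j * len(seq) // B:(j + 1) * len(seq) // B] for j in range(B)]

# Two-pass: every block is scored against the single shared model.
shared_bits = sum(code_length(b, depth, counts, totals) for b in blocks)
# One-pass baseline: each block trains and uses its own local model.
indep_bits = sum(code_length(b, depth, *context_counts(b, depth)) for b in blocks)
```

Because the shared model is estimated once from the whole input, the per-block encoders in the second pass are independent of one another, which is what makes the parallel speedup essentially free in compression quality.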
Mismatched Estimation in Large Linear Systems
We study the excess mean square error (EMSE) above the minimum mean square
error (MMSE) in large linear systems where the posterior mean estimator (PME)
is evaluated with a postulated prior that differs from the true prior of the
input signal. We focus on large linear systems where the measurements are
acquired via an independent and identically distributed random matrix, and are
corrupted by additive white Gaussian noise (AWGN). The relationship between the
EMSE in large linear systems and EMSE in scalar channels is derived, and closed
form approximations are provided. Our analysis is based on the decoupling
principle, which links scalar channels to large linear system analyses.
Numerical examples demonstrate that our closed form approximations are
accurate.
Comment: 5 pages, 2 figures
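The scalar-channel side of this analysis can be illustrated numerically. The sketch below is an assumption-laden toy, not the paper's derivation: it uses a two-point Bernoulli prior, a scalar AWGN channel y = x + sigma*z, and simple quadrature. The posterior mean estimator (PME) built from the true prior attains the MMSE; evaluating the PME with a postulated (here, uniform) prior incurs a nonnegative excess MSE (EMSE).

```python
import math

def phi(t):
    # standard normal pdf
    return math.exp(-t * t / 2) / math.sqrt(2 * math.pi)

def pme(y, p, sigma):
    # posterior mean of x in y = x + sigma*z with x ~ Bernoulli(p) on {0, 1}
    w1 = p * phi((y - 1) / sigma)
    w0 = (1 - p) * phi(y / sigma)
    return w1 / (w1 + w0)

def mse(p_true, p_post, sigma, grid=2000, lo=-8.0, hi=9.0):
    # squared error of the PME built from p_post, averaged under the true prior
    dy = (hi - lo) / grid
    total = 0.0
    for k in range(grid):
        y = lo + (k + 0.5) * dy
        for x, w in ((0, 1 - p_true), (1, p_true)):
            total += w * phi((y - x) / sigma) / sigma * (pme(y, p_post, sigma) - x) ** 2 * dy
    return total

sigma, p_true = 0.5, 0.1
mmse = mse(p_true, p_true, sigma)   # matched: postulated prior equals true prior
m_mis = mse(p_true, 0.5, sigma)     # mismatched: postulated uniform prior
emse = m_mis - mmse                 # excess MSE is nonnegative
```

The decoupling principle is what licenses carrying such scalar-channel EMSE expressions over to the large linear system setting.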
Analysis of Approximate Message Passing with a Class of Non-Separable Denoisers
Approximate message passing (AMP) is a class of efficient algorithms for
solving high-dimensional linear regression tasks where one wishes to recover an
unknown signal \beta_0 from noisy, linear measurements y = A \beta_0 + w. When
applying a separable denoiser at each iteration, the performance of AMP (for
example, the mean squared error of its estimates) can be accurately tracked by
a simple, scalar iteration referred to as state evolution. Although separable
denoisers are sufficient if the unknown signal has independent and identically
distributed entries, in many real-world applications, like image or audio
signal reconstruction, the unknown signal contains dependencies between
entries. In these cases, a coordinate-wise independence structure is not a good
approximation to the true prior of the unknown signal. In this paper we assume
the unknown signal has dependent entries, and using a class of non-separable
sliding-window denoisers, we prove that a new form of state evolution still
accurately predicts AMP performance. This is an early step in understanding the
role of non-separable denoisers within AMP, and will lead to a characterization
of more general denoisers in problems including compressive image
reconstruction.
Comment: 37 pages, 1 figure. A shorter version of this paper to appear in the
proceedings of ISIT 2017
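For contrast with the non-separable setting above, the baseline case is easy to sketch: AMP with a separable soft-threshold denoiser applied entrywise, with the residual variance serving as an empirical proxy for the state-evolution noise level. The problem sizes, sparsity, threshold multiplier, and iteration count below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n, k, sigma = 500, 250, 25, 0.01
beta0 = np.zeros(N)
beta0[rng.choice(N, k, replace=False)] = rng.normal(size=k)  # sparse i.i.d. signal
A = rng.normal(size=(n, N)) / np.sqrt(n)                     # i.i.d. Gaussian matrix
y = A @ beta0 + sigma * rng.normal(size=n)                   # noisy linear measurements

def soft(u, t):
    # separable soft-threshold denoiser, applied coordinate-wise
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

x, z = np.zeros(N), y.copy()
for _ in range(30):
    pseudo = x + A.T @ z                  # effective AWGN observation of beta0
    tau = np.sqrt(np.mean(z ** 2))        # empirical state-evolution noise estimate
    x_new = soft(pseudo, 1.5 * tau)       # assumed threshold multiplier of 1.5
    # Onsager correction: average derivative of the denoiser times the residual
    onsager = z * np.mean(np.abs(x_new) > 0) * (N / n)
    z = y - A @ x_new + onsager
    x = x_new

mse = np.mean((x - beta0) ** 2)
```

A sliding-window denoiser would replace `soft` with a function of each coordinate's neighborhood; the point of the paper is that a suitably generalized state evolution still tracks such non-separable variants.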
An Overview of Multi-Processor Approximate Message Passing
Approximate message passing (AMP) is an algorithmic framework for solving
linear inverse problems from noisy measurements, with exciting applications
such as reconstructing images, audio, hyperspectral images, and various other
signals, including those acquired in compressive signal acquisition systems. The
growing prevalence of big data systems has increased interest in large-scale
problems, which may involve huge measurement matrices that are unsuitable for
conventional computing systems. To address the challenge of large-scale
processing, multiprocessor (MP) versions of AMP have been developed. We provide
an overview of two such MP-AMP variants. In row-MP-AMP, each computing node
stores a subset of the rows of the matrix and processes corresponding
measurements. In column-MP-AMP, each node stores a subset of columns, and is
solely responsible for reconstructing a portion of the signal. We will discuss
pros and cons of both approaches, summarize recent research results for each,
and explain when each one may be a viable approach. Aspects that are
highlighted include some recent results on state evolution for both MP-AMP
algorithms, and the use of data compression to reduce communication in the MP
network.
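The row partitioning described above can be made concrete with a small sketch. This is an illustrative toy, not either paper's protocol: each node holds a subset of the rows of A and the matching residual entries, computes a local matched-filter output, and a fusion step sums the node messages, which reproduces the centralized computation exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
n, N, P = 120, 200, 4                      # measurements, signal length, nodes
A = rng.normal(size=(n, N)) / np.sqrt(n)   # full measurement matrix
z = rng.normal(size=n)                     # stand-in for the AMP residual

# row-MP-AMP: node p holds a block of rows and the matching residual entries
rows = np.array_split(np.arange(n), P)
partial = [A[idx].T @ z[idx] for idx in rows]  # local matched-filter outputs
fused = np.sum(partial, axis=0)                # fusion center sums node messages

centralized = A.T @ z                          # what a single machine would compute
```

Since A^T z = sum over row blocks of A_p^T z_p, row partitioning changes where the work happens but not the iterates; the communication cost of shipping the `partial` vectors is what the data-compression results mentioned above target.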
Compressive Imaging via Approximate Message Passing with Image Denoising
We consider compressive imaging problems, where images are reconstructed from
a reduced number of linear measurements. Our objective is to improve over
existing compressive imaging algorithms in terms of both reconstruction error
and runtime. To pursue our objective, we propose compressive imaging algorithms
that employ the approximate message passing (AMP) framework. AMP is an
iterative signal reconstruction algorithm that performs scalar denoising at
each iteration; in order for AMP to reconstruct the original input signal well,
a good denoiser must be used. We apply two wavelet based image denoisers within
AMP. The first denoiser is the "amplitude-scale-invariant Bayes estimator"
(ABE), and the second is an adaptive Wiener filter; we call our AMP based
algorithms for compressive imaging AMP-ABE and AMP-Wiener. Numerical results
show that both AMP-ABE and AMP-Wiener significantly improve over the state of
the art in terms of runtime. In terms of reconstruction quality, AMP-Wiener
offers lower mean square error (MSE) than existing compressive imaging
algorithms. In contrast, AMP-ABE has higher MSE, because ABE does not denoise
as well as the adaptive Wiener filter.
Comment: 15 pages; 2 tables; 7 figures; to appear in IEEE Trans. Signal
Process.
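The adaptive Wiener filter that gives AMP-Wiener its edge can be sketched in its simplest scalar form. This is a hedged 1D toy, not the paper's wavelet-domain implementation: the window size, signal model, and noise level are assumptions, and the filter shrinks each sample toward a local mean in proportion to an estimated local signal variance.

```python
import numpy as np

rng = np.random.default_rng(2)
n, sigma = 256, 0.5
x = np.repeat(rng.normal(scale=2.0, size=8), n // 8)  # piecewise-constant toy signal
y = x + sigma * rng.normal(size=n)                    # AWGN-corrupted observation

def wiener_denoise(y, sigma, win=9):
    # adaptive Wiener: per-sample shrinkage using a local empirical signal variance
    pad = win // 2
    yp = np.pad(y, pad, mode="edge")
    out = np.empty_like(y)
    for i in range(len(y)):
        w = yp[i:i + win]
        local_var = max(w.var() - sigma ** 2, 0.0)  # estimated signal variance
        mean = w.mean()
        # shrink toward the local mean; flat regions are averaged, edges preserved
        out[i] = mean + local_var / (local_var + sigma ** 2) * (y[i] - mean)
    return out

xhat = wiener_denoise(y, sigma)
```

Inside AMP, a denoiser of this kind is applied to the effective AWGN observation at each iteration, with sigma replaced by the state-evolution noise estimate.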